
Optimistic Regret Minimization for Extensive-Form Games via Dilated Distance-Generating Functions

Neural Information Processing Systems

We study the performance of optimistic regret-minimization algorithms for both minimizing regret in, and computing Nash equilibria of, zero-sum extensive-form games. In order to apply these algorithms to extensive-form games, a distance-generating function is needed. We study the use of the dilated entropy and dilated Euclidean distance functions. For the dilated Euclidean distance function we prove the first explicit bounds on the strong-convexity parameter for general treeplexes. Furthermore, we show that the use of dilated distance-generating functions enables us to decompose the mirror descent algorithm, and its optimistic variant, into local mirror descent algorithms at each information set. This decomposition mirrors the structure of the counterfactual regret minimization framework, and enables important techniques in practice, such as distributed updates and pruning of cold parts of the game tree. Our algorithms provably converge at a rate of $T^{-1}$, which is superior to prior counterfactual regret minimization algorithms. We experimentally compare to the popular algorithm CFR+, which has a theoretical convergence rate of $T^{-0.5}$, but is known to often converge at a rate of $T^{-1}$, or better, in practice. We give an example matrix game where CFR+ experimentally converges at a relatively slow rate of $T^{-0.74}$.
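To make the "optimistic" ingredient concrete, here is a minimal sketch of optimistic hedge (entropy-DGF mirror descent that uses the last loss vector as a prediction of the next one) on a small matrix game. This is only an illustration of the optimistic mechanism on a simplex, not the paper's extensive-form algorithm; the game matrix, step size, and function names below are illustrative assumptions.

```python
import math

# A small asymmetric zero-sum game: the row player minimizes x^T A y,
# the column player maximizes it. Its mixed equilibrium is
# x* = y* = (2/5, 3/5), with game value 1/5.
A = [[2, -1],
     [-1, 1]]

def matvec(M, v):
    """M @ v for a list-of-lists matrix."""
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def tmatvec(M, v):
    """M^T @ v."""
    return [sum(M[i][j] * v[i] for i in range(len(M))) for j in range(len(M[0]))]

def optimistic_hedge(cum_loss, last_loss, eta):
    """Entropy-DGF mirror step: softmax of -(cumulative + predicted) loss.

    Re-using the most recent loss vector as the prediction of the next
    one is the 'optimistic' twist on plain hedge / mirror descent.
    """
    scores = [-eta * (c + l) for c, l in zip(cum_loss, last_loss)]
    m = max(scores)  # shift before exponentiating, for numerical stability
    w = [math.exp(s - m) for s in scores]
    z = sum(w)
    return [wi / z for wi in w]

def solve(T=2000, eta=0.05):
    n = len(A)
    cum_x, cum_y = [0.0] * n, [0.0] * n
    gx, gy = [0.0] * n, [0.0] * n
    avg_x, avg_y = [0.0] * n, [0.0] * n
    for _ in range(T):
        x = optimistic_hedge(cum_x, gx, eta)
        y = optimistic_hedge(cum_y, gy, eta)
        gx = matvec(A, y)                 # loss vector for the minimizer
        gy = [-v for v in tmatvec(A, x)]  # loss vector for the maximizer
        cum_x = [c + g for c, g in zip(cum_x, gx)]
        cum_y = [c + g for c, g in zip(cum_y, gy)]
        avg_x = [a + xi / T for a, xi in zip(avg_x, x)]
        avg_y = [a + yi / T for a, yi in zip(avg_y, y)]
    # Saddle-point gap of the average strategies (0 at an exact equilibrium).
    gap = max(tmatvec(A, avg_x)) - min(matvec(A, avg_y))
    return avg_x, avg_y, gap
```

Running `solve()` drives the saddle-point gap of the averaged strategies toward zero. The paper's contribution is, roughly, how to carry this kind of optimistic mirror-descent step from the simplex to the treeplex of an extensive-form game via dilated distance-generating functions.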


Reviews: Optimistic Regret Minimization for Extensive-Form Games via Dilated Distance-Generating Functions

Neural Information Processing Systems

This paper considers the application of optimistic regret-minimization algorithms to extensive-form games. It follows a similar setup to CFR, the current SOTA algorithm for solving large games, which decomposes the overall regret-minimization problem into independent subproblems at each information set. Through that lens, I liked this paper. Theoretically, the new algorithms should have faster convergence than the CFR+ variant of CFR. Empirically, the authors show that these methods converge dramatically faster than CFR in small games, but slower in the slightly larger game of Leduc hold'em.
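For context, the per-information-set decomposition discussed here hinges on the dilated construction: a treeplex distance-generating function is assembled from a local DGF at each information set, scaled by the value of its parent sequence. In common notation from the dilated-DGF literature (the specific symbols are assumptions, not necessarily the paper's):

```latex
% Dilated DGF on a treeplex: each information set j contributes a local
% DGF \psi_j, dilated by the parent-sequence value x_{\sigma(j)}.
\psi(x) \;=\; \sum_{j \in \mathcal{J}} \beta_j \, x_{\sigma(j)}\,
  \psi_j\!\left( \frac{x^{j}}{x_{\sigma(j)}} \right),
\qquad
\psi_j(z) = \sum_i z_i \log z_i \quad \text{(dilated entropy case)}.
```

Because each term depends on the local behavioral strategy $x^{j}/x_{\sigma(j)}$ only through its own information set, the mirror descent update (and its optimistic variant) factors into independent local updates, which is the CFR-like structure described above.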


Reviews: Optimistic Regret Minimization for Extensive-Form Games via Dilated Distance-Generating Functions

Neural Information Processing Systems

This paper proposes the use of "dilated" distance-generating functions to write the associated mirror descent algorithm as a local regret-minimization procedure on the decision nodes of two-person zero-sum extensive-form games. After the discussion phase, the reviewers and I did not have any major concerns about the paper, and I am happy to recommend acceptance (modulo the reviewers' specific suggestions for points to be revised in the camera-ready).



Optimistic Regret Minimization for Extensive-Form Games via Dilated Distance-Generating Functions

Farina, Gabriele, Kroer, Christian, Sandholm, Tuomas

Neural Information Processing Systems
